29 research outputs found

    Robustness of Learning That Is Based on Covariance-Driven Synaptic Plasticity

    Get PDF
    It is widely believed that learning is due, at least in part, to long-lasting modifications of the strengths of synapses in the brain. Theoretical studies have shown that a family of synaptic plasticity rules, in which synaptic changes are driven by covariance, is particularly useful for many forms of learning, including associative memory, gradient estimation, and operant conditioning. Covariance-based plasticity is, however, inherently sensitive to the tuning of its parameters: even a slight mistuning of a covariance-based plasticity rule is likely to result in substantial changes in synaptic efficacies. Therefore, the biological relevance of covariance-based plasticity models is questionable. Here, we study the effects of mistuning the parameters of the plasticity rule in a decision-making model in which synaptic plasticity is driven by the covariance of reward and neural activity. An exact covariance plasticity rule yields Herrnstein's matching law. We show that although the effect of slight mistuning of the plasticity rule on the synaptic efficacies is large, the behavioral effect is small. Thus, matching behavior is robust to mistuning of the parameters of the covariance-based plasticity rule. Furthermore, the mistuned covariance rule results in undermatching, which is consistent with experimentally observed behavior. These results substantiate the hypothesis that approximate covariance-based synaptic plasticity underlies operant conditioning. However, we show that mistuning of the mean subtraction makes behavior sensitive to mistuning of the properties of the decision-making network. Thus, there is a tradeoff between the robustness of matching behavior to changes in the plasticity rule and its robustness to changes in the properties of the decision-making network.
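    To make the covariance rule concrete, here is a minimal sketch of reward-driven learning on a concurrent variable-interval (baited) schedule, assuming a two-option task in which choice probability is proportional to relative synaptic efficacy. The parameter `eps` is a hypothetical knob for mistuning the reward-mean subtraction (exact covariance at `eps = 0`); all parameter values are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

p = np.array([0.2, 0.1])   # baiting probabilities per trial (illustrative)
w = np.array([1.0, 1.0])   # synaptic efficacies driving each choice
eta, eps = 0.02, 0.2       # learning rate; mistuning of mean subtraction
r_bar = 0.0                # running estimate of the mean reward
baited = np.zeros(2, dtype=bool)
choices, income = np.zeros(2), np.zeros(2)

for trial in range(50000):
    baited |= rng.random(2) < p          # reward stays armed until collected
    prob = w / w.sum()                   # choice fractions follow efficacies
    c = rng.choice(2, p=prob)
    r = float(baited[c]); baited[c] = False
    n = np.zeros(2); n[c] = 1.0          # activity of the chosen population
    # covariance rule: exact when eps = 0; eps > 0 under-subtracts the mean
    w += eta * (r - (1.0 - eps) * r_bar) * (n - prob)
    w = np.clip(w, 1e-3, None)
    r_bar += 0.005 * (r - r_bar)
    choices[c] += 1; income[c] += r

# under exact matching these two agree; mistuning yields undermatching
print("choice fractions:", choices / choices.sum())
print("income fractions:", income / income.sum())
```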

    Spike Timing Dependent Plasticity Finds the Start of Repeating Patterns in Continuous Spike Trains

    Get PDF
    Experimental studies have observed long-term potentiation (LTP) when a presynaptic neuron fires shortly before a postsynaptic neuron, and long-term depression (LTD) when the presynaptic neuron fires shortly after, a phenomenon known as Spike Timing Dependent Plasticity (STDP). When a neuron is presented successively with discrete volleys of input spikes, STDP has been shown to learn ‘early spike patterns’, that is, to concentrate synaptic weights on afferents that consistently fire early, with the result that the postsynaptic spike latency decreases until it reaches a minimal and stable value. Here, we show that these results still stand in a continuous regime where afferents fire continuously with a constant population rate. As such, STDP is able to solve a very difficult computational problem: to localize a repeating spatio-temporal spike pattern embedded in equally dense ‘distractor’ spike trains. STDP thus enables some form of temporal coding, even in the absence of an explicit time reference. Given that the mechanism exposed here is simple and cheap, it is hard to believe that the brain did not evolve to use it.
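    The mechanism can be sketched with a leaky integrate-and-fire neuron and pair-based additive STDP: a frozen spike pattern is embedded at random times in Poisson ‘distractor’ activity, and LTP of recently active afferents gradually concentrates weight on the pattern. This is a simplified discrete-time toy, not the authors' simulation; all parameters are illustrative, and in favorable regimes the pattern afferents end up with the higher weights.

```python
import numpy as np

rng = np.random.default_rng(1)

n_aff, n_pat = 200, 100                  # afferents; afferents in the pattern
dt, steps, rate = 1e-3, 200000, 10.0     # 1 ms steps, 200 s, 10 Hz background
pat_len = 50                             # pattern duration: 50 ms
pattern = rng.random((pat_len, n_pat)) < rate * dt   # frozen spike pattern

w = np.full(n_aff, 0.5)
tau_m, thresh = 10e-3, 13.0              # membrane decay; firing threshold
a_plus, a_minus, tau_tr = 0.010, 0.012, 20e-3
pre_tr = np.zeros(n_aff); post_tr, v = 0.0, 0.0
t_in_pat, next_pat = -1, 200

for t in range(steps):
    spikes = rng.random(n_aff) < rate * dt          # distractor activity
    if t == next_pat:
        t_in_pat = 0
    if t_in_pat >= 0:                               # overwrite with the pattern
        spikes[:n_pat] = pattern[t_in_pat]
        t_in_pat += 1
        if t_in_pat == pat_len:
            t_in_pat = -1
            next_pat = t + int(rng.integers(100, 400))
    pre_tr *= np.exp(-dt / tau_tr); pre_tr[spikes] += 1.0
    post_tr *= np.exp(-dt / tau_tr)
    w[spikes] -= a_minus * post_tr                  # LTD: pre after post
    v = v * np.exp(-dt / tau_m) + w @ spikes
    if v > thresh:                                  # postsynaptic spike
        w += a_plus * pre_tr                        # LTP: pre shortly before
        v, post_tr = 0.0, post_tr + 1.0
    np.clip(w, 0.0, 1.0, out=w)

print("mean weight, pattern afferents   :", w[:n_pat].mean().round(3))
print("mean weight, distractor afferents:", w[n_pat:].mean().round(3))
```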

    Order-Based Representation in Random Networks of Cortical Neurons

    Get PDF
    The wide range of time scales involved in neural excitability and synaptic transmission might lead to ongoing change in the temporal structure of responses to recurring stimulus presentations on a trial-to-trial basis. This is probably the most severe biophysical constraint on putative time-based primitives of stimulus representation in neuronal networks. Here we show that in spontaneously developing large-scale random networks of cortical neurons in vitro, the order in which neurons are recruited following each stimulus is a naturally emerging representation primitive that is invariant to significant temporal changes in spike times. With a relatively small number of randomly sampled neurons, the information about stimulus position is fully retrievable from the recruitment order. The effective connectivity that makes order-based representation invariant to time warping is characterized by the existence of stations through which activity is required to pass in order to propagate further into the network. This study uncovers a simple invariant in a noisy biological network in vitro; its applicability under in vivo constraints remains to be seen.
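    A toy illustration of why recruitment order survives time warping: multiplying all first-spike latencies by a trial-specific factor leaves their rank order intact, so a rank-based template decoder is unaffected while absolute spike times vary widely. The latency templates and noise levels below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

n_neurons, n_trials = 20, 500
lat_a = rng.uniform(5.0, 50.0, n_neurons)   # hypothetical mean latencies (ms)
lat_b = rng.uniform(5.0, 50.0, n_neurons)   # for two stimulus positions

def observe(lat):
    warp = rng.uniform(0.5, 2.0)                  # trial-specific time warping
    jitter = rng.normal(0.0, 2.0, lat.size)       # additive spike-time noise
    return warp * lat + jitter

def rank_decode(latencies, templates):
    r = latencies.argsort().argsort()             # recruitment order (ranks)
    scores = [np.corrcoef(r, t.argsort().argsort())[0, 1] for t in templates]
    return int(np.argmax(scores))                 # best rank-correlated template

correct = 0
for _ in range(n_trials):
    s = int(rng.integers(2))
    correct += rank_decode(observe([lat_a, lat_b][s]), [lat_a, lat_b]) == s

print("rank-order decoding accuracy:", correct / n_trials)
```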

    Balancing Feed-Forward Excitation and Inhibition via Hebbian Inhibitory Synaptic Plasticity

    Get PDF
    It has been suggested that excitatory and inhibitory inputs to cortical cells are balanced, and that this balance is important for the highly irregular firing observed in the cortex. There are two hypotheses as to the origin of this balance. One assumes that it results from a stable solution of the recurrent neuronal dynamics. This model can account for a balance of steady-state excitation and inhibition without fine tuning of parameters, but not for transient inputs. The second hypothesis suggests that the feed-forward excitatory and inhibitory inputs to a postsynaptic cell are already balanced. This latter hypothesis thus does account for the balance of transient inputs. However, it remains unclear what mechanism underlies the fine tuning required for balancing feed-forward excitatory and inhibitory inputs. Here we investigated whether inhibitory synaptic plasticity is responsible for the balance of transient feed-forward excitation and inhibition. We address this issue in the framework of a model characterizing the stochastic dynamics of temporally anti-symmetric Hebbian spike-timing-dependent plasticity of feed-forward excitatory and inhibitory synaptic inputs to a single postsynaptic cell. Our analysis shows that inhibitory Hebbian plasticity generates ‘negative feedback’ that balances excitation and inhibition, in contrast to the ‘positive feedback’ of excitatory Hebbian synaptic plasticity. As a result, this balance may increase the sensitivity of the learning dynamics to the correlation structure of the excitatory inputs.
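    A minimal sketch of the negative-feedback idea, assuming a single leaky integrate-and-fire cell with frozen excitatory weights and plastic inhibitory weights under pair-based Hebbian STDP (pre-before-post strengthens inhibition). Excess excitation drives more postsynaptic spikes, which potentiate inhibition until the two currents roughly cancel. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

dt, steps = 1e-3, 200000
n_e, n_i = 100, 25
w_e = np.full(n_e, 0.3)                   # frozen feed-forward excitation
w_i = np.full(n_i, 0.1)                   # plastic feed-forward inhibition
tau_m, thresh, tau_tr, eta = 20e-3, 1.0, 20e-3, 0.001
tr_i, tr_post, v = np.zeros(n_i), 0.0, 0.0
e_cur, i_cur = [], []

for t in range(steps):
    r = (5.0 + 4.0 * np.sin(2 * np.pi * 2.0 * t * dt)) * dt  # shared rate drive
    se, si = rng.random(n_e) < r, rng.random(n_i) < r
    tr_i *= np.exp(-dt / tau_tr); tr_i[si] += 1.0
    tr_post *= np.exp(-dt / tau_tr)
    exc, inh = w_e @ se, w_i @ si
    e_cur.append(exc); i_cur.append(inh)
    v = v * np.exp(-dt / tau_m) + exc - inh
    w_i[si] -= eta * 0.85 * tr_post       # LTD of inhibition: pre after post
    if v > thresh:                        # postsynaptic spike
        v = 0.0; tr_post += 1.0
        w_i += eta * tr_i                 # LTP of inhibition: pre before post
    np.clip(w_i, 0.0, 2.0, out=w_i)

half = steps // 2
print("E/I current ratio, first half :",
      round(np.sum(e_cur[:half]) / np.sum(i_cur[:half]), 2))
print("E/I current ratio, second half:",
      round(np.sum(e_cur[half:]) / np.sum(i_cur[half:]), 2))
```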

    Context Matters: The Illusive Simplicity of Macaque V1 Receptive Fields

    Get PDF
    Even in V1, where neurons have well characterized classical receptive fields (CRFs), it has been difficult to deduce which features of natural scenes they actually respond to. Forward models based upon CRF stimuli have had limited success in predicting the response of V1 neurons to natural scenes. As natural scenes exhibit complex spatial and temporal correlations, this could be due to surround effects that modulate the sensitivity of the CRF. Here, instead of attempting a forward model, we quantify the importance of the natural-scene surround for awake macaque monkeys by modeling it non-parametrically. We also quantify the influence of two forms of trial-to-trial variability. The first is related to the neuron's own spike history. The second is related to ongoing mean-field population activity reflected by the local field potential (LFP). We find that the surround produces strong temporal modulations in the firing rate that can be both suppressive and facilitative. Further, the LFP is found to induce a precise timing in spikes, which tend to be temporally localized on sharp LFP transients in the gamma frequency range. Using the pseudo-R² as a measure of model fit, we find that during natural scene viewing the CRF dominates, accounting for 60% of the fit, but that taken collectively the surround, spike history and LFP are almost as important, accounting for 40%. However, overall only a small proportion of V1 spiking statistics could be explained (R² ≈ 5%), even when the full stimulus, spike history and LFP were taken into account. This suggests that under natural scene conditions, the dominant influence on V1 neurons is not the stimulus, nor the mean-field dynamics of the LFP, but the complex, incoherent dynamics of the network in which neurons are embedded.
    National Institutes of Health (U.S.) (K25 NS052422-02); National Institutes of Health (U.S.) (DP1 OD003646)
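    For reference, a deviance-based pseudo-R² of the kind commonly used to score Poisson spike-count models can be computed as below. The synthetic ‘CRF’ and ‘context’ drives are stand-ins to show how adding covariates raises the score; this is not the authors' fitted model.

```python
import numpy as np

def poisson_ll(y, mu):
    """Poisson log-likelihood up to the log(y!) term, which cancels in ratios."""
    mu = np.clip(mu, 1e-12, None)
    return np.sum(y * np.log(mu) - mu)

def pseudo_r2(y, mu):
    """Deviance-based pseudo-R^2: 1 - D(model) / D(null)."""
    ll_sat = poisson_ll(y, np.maximum(y, 1e-12))          # saturated model
    ll_null = poisson_ll(y, np.full(y.shape, y.mean()))   # mean-rate model
    return 1.0 - (ll_sat - poisson_ll(y, mu)) / (ll_sat - ll_null)

rng = np.random.default_rng(4)
n = 50000
crf = 0.8 * np.sin(np.linspace(0.0, 400.0 * np.pi, n))   # stand-in CRF drive
ctx = 0.6 * rng.standard_normal(n)                       # stand-in surround/LFP
mu_full = np.exp(-1.5 + crf + ctx)                       # ground-truth rate
y = rng.poisson(mu_full)

print("pseudo-R^2, CRF only     :", round(pseudo_r2(y, np.exp(-1.5 + crf)), 3))
print("pseudo-R^2, CRF + context:", round(pseudo_r2(y, mu_full), 3))
```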

    Learning in spatially extended dendrites

    Get PDF
    Dendrites are not static structures: new synaptic connections are established and old ones disappear. Moreover, it is now known that plasticity can vary with distance from the soma [1]. Consequently, it is of great interest to combine learning algorithms with spatially extended neuron models. In particular, this may shed further light on the computational advantages of plastic dendrites, say for direction selectivity or coincidence detection. Direction-selective neurons fire for one spatio-temporal input sequence on their dendritic tree but stay silent if the temporal order is reversed [2], whilst "coincidence detectors" such as those in the auditory brainstem are known to make use of dendrites to detect temporal differences in sound arrival times between ears to an astounding accuracy [3]. Here we develop one such combination of learning and dendritic dynamics by extending the "Spike-Diffuse-Spike" [4] framework of an active dendritic tree to incorporate both artificial (tempotron-style [5]) and biological (STDP-style [2]) learning rules.
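    As a point of reference, the tempotron-style rule [5] mentioned above can be sketched on a point neuron (the dendritic cable of the Spike-Diffuse-Spike framework is omitted here): when the classification is wrong, weights are nudged in proportion to each afferent's contribution to the voltage at its maximum. Patterns, kernel constants, and the learning rate are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

n_syn, T = 50, 100.0                     # synapses; pattern duration (ms)
tau_m, tau_s = 15.0, 3.75                # PSP kernel time constants (ms)
ts = np.arange(0.0, T, 0.5)              # voltage evaluation grid
thresh, lr = 1.0, 0.02

def kernel(t):
    t = np.maximum(t, 0.0)               # causal postsynaptic potential
    return np.exp(-t / tau_m) - np.exp(-t / tau_s)

def voltage(w, spike_times):
    # summed PSPs: one spike per synapse, at spike_times[i]
    return (w[:, None] * kernel(ts[None, :] - spike_times[:, None])).sum(0)

patterns = [rng.uniform(0.0, T, n_syn) for _ in range(10)]
labels = rng.integers(0, 2, 10)          # should the neuron fire? (0/1)
w = rng.normal(0.0, 0.01, n_syn)

for epoch in range(500):
    for x, y in zip(patterns, labels):
        v = voltage(w, x)
        if (v.max() > thresh) != bool(y):        # error: adjust at t_max
            dw = lr * kernel(ts[v.argmax()] - x)
            w += dw if y else -dw

acc = np.mean([(voltage(w, x).max() > thresh) == bool(y)
               for x, y in zip(patterns, labels)])
print("training accuracy:", acc)
```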

    Temporal compression mediated by short-term synaptic plasticity

    No full text
    Time scales of cortical neuronal dynamics range from a few milliseconds to hundreds of milliseconds. In contrast, behavior occurs on the time scale of seconds or longer. How, then, can behavioral time be neuronally represented in cortical networks? Here, using electrophysiology and modeling, we offer a hypothesis on how to bridge the gap between behavioral and cellular time scales. The core idea is to use a long decay time constant of synaptic facilitation to translate slow, behaviorally induced temporal correlations into a distribution of synaptic response amplitudes. These amplitudes can then be transferred to a sequence of action potentials in a population of neurons. These sequences provide temporal correlations on a millisecond time scale that are able to induce persistent synaptic changes. As a proof of concept, we provide simulations of a neuron that learns to discriminate temporal patterns on a time scale of seconds using synaptic learning rules with a millisecond memory buffer. We find that the conversion from synaptic amplitudes to millisecond correlations can be strongly facilitated by subthreshold oscillations, both in terms of information transmission and success of learning.
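    The amplitude-coding step can be illustrated with a facilitation-only variant of the Tsodyks-Markram synapse model: the facilitation variable `u` decays with a long time constant, so the response amplitude to each spike reflects the seconds-scale interval since the previous one. Parameter values are illustrative, not taken from the paper.

```python
import numpy as np

tau_F, U = 2.0, 0.1      # facilitation decay (s); baseline release probability

def response_amplitudes(spike_times):
    """Amplitude ~ facilitation variable u at each spike (depression omitted)."""
    u, last, amps = U, None, []
    for t in spike_times:
        if last is not None:
            u = U + (u - U) * np.exp(-(t - last) / tau_F)  # decay back to U
        amps.append(u)
        u += U * (1.0 - u)        # facilitation jump caused by the spike
        last = t
    return np.array(amps)

# the same number of spikes, separated by behavioral-scale intervals
short_gaps = response_amplitudes(np.cumsum(np.full(10, 0.2)))  # 200 ms apart
long_gaps = response_amplitudes(np.cumsum(np.full(10, 2.0)))   # 2 s apart

print("steady amplitude, 200 ms intervals:", short_gaps[-1].round(3))
print("steady amplitude, 2 s intervals   :", long_gaps[-1].round(3))
```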